Open Access Journals


∆DHT-Zip: A Delta-difference Hybrid Tree Coding Scheme for End-to-end Packet Compression Framework in Network-on-Chips

By T. Pullaiah K. Manjunathachari B. L. Malleswari

DOI: https://doi.org/10.5815/ijcnis.2025.02.02, Pub. Date: 8 Apr. 2025

Thanks to its high transistor count, a Multi-Processor System-on-Chip (MPSoC) delivers more performance than a uniprocessor system, and the Network-on-Chip (NoC) in an MPSoC provides scalable connectivity compared to traditional bus-based interconnects. Still, the NoC significantly impacts MPSoC design because it increases power consumption and network latency. One solution to this problem is packet compression, which removes data redundancy within NoC packets and, by shrinking packet size, reduces the power consumption of the whole network. Although packet compression improves NoC performance, the latency and overhead of the compressor and decompressor add memory access time, so the problem demands a simple, lightweight compression method such as delta compression. Consequently, this research proposes a new delta-difference Hybrid Tree coding scheme (∆DHT-Zip) to compress and decompress data packets in the NoC framework. In this approach, Delta encoding, Huffman encoding, and DNA (deoxyribonucleic acid) tree coding are hybridized to perform packet compression and decompression. Moreover, Run Length Encoding (RLE) is used to compress the metadata produced by both the encoding and decoding processes. The proposed ∆DHT-Zip method yields decreased packet loss and significant power savings. The simulation results show that ∆DHT-Zip minimizes packet latency and outperforms existing data compression approaches with a mean Compression Ratio (CR) of 1.2%, which is 79.06% greater than the existing FlitZip algorithm.

Robustness Assessment of Data Loss Prevention (DLP) Software for Data Leakage against Different Data Types and Sources

By Ahmet Ali Suzen Osman Ceylan

DOI: https://doi.org/10.5815/ijeme.2025.02.01, Pub. Date: 8 Apr. 2025

Data leakage is the deliberate or accidental transfer of an institution's or individual's data to an external destination. With the increased use of IT assets since the pandemic, data leaks have become more common. Firewalls, anti-virus software, Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS) are the products usually preferred for securing data sources within a network. However, such security software operates server-side and mostly protects the network from outside attacks, while the main source of recent data leaks has been internal vulnerabilities. Data Loss Prevention (DLP), the appropriate choice for preventing data leaks, is a system developed to identify, monitor, and protect data in motion or stored in a database; DLP systems are preferred for preventing unauthorized distribution of data at its source. DLP software is recommended as a technical measure for data security, particularly under the Personal Data Protection Law (KVKK) in Turkey and the General Data Protection Regulation (GDPR) in the European Union.
Test virtual machines were set up to implement real-world scenarios, and using personal and corporate data, the behavior and durability of DLP software were evaluated in cases of unauthorized data upload to USB, CD/DVD, cloud resources, office software, e-mail, or FTP servers. It was observed that potential leaks and risks occur in data discovery, data masking, data hiding, and data encryption, depending on the data density involved in leakage prevention.

Industrial Monitoring System with Real-time Alerts and Automated Protection Mechanisms

By Nabusha Alice Asiimwe Julius U. I. Bature Mugisha Simon Tusiime Meron

DOI: https://doi.org/10.5815/ijem.2025.02.05, Pub. Date: 8 Apr. 2025

This work presents the design and prototyping of an Industrial Monitoring and Protection System aimed at enhancing safety and operational efficiency in industrial environments. The system integrates multiple sensors with a GSM module to monitor and respond to critical environmental parameters, such as ambient light levels, temperature, and smoke detection. A Light Dependent Resistor (LDR) is configured to detect excessive lighting levels, interfacing with a microcontroller to activate the GSM module and send alert messages when thresholds are exceeded. The temperature sensor continuously monitors ambient temperature, and upon detecting overheating, the microcontroller triggers the GSM module to notify operators. Similarly, a smoke sensor detects the presence of harmful smoke and initiates an alert through the GSM module for early fire hazard detection. These sensors are connected to the microcontroller via analog and digital input pins, with their outputs processed to enable condition-based responses. A relay switch, controlled by the microcontroller, automatically disconnects connected loads when safety thresholds are breached, preventing equipment damage and ensuring personnel safety. Real-time sensor readings and system status are displayed on an OLED screen, providing operators with comprehensive, up-to-date information on the monitored environment. The system dynamically responds to environmental conditions by triggering alerts and actions based on customizable safety thresholds for light intensity, temperature, and smoke levels. This integrated architecture ensures seamless communication between sensors, the microcontroller, and the GSM module, delivering real-time monitoring, automated protective mechanisms, and early warning capabilities. The proposed system demonstrates the feasibility of affordable and scalable solutions for industrial safety, offering immediate responses to hazardous conditions while minimizing downtime. 
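The condition-based response loop the abstract describes can be sketched as follows: compare each sensor reading against a customizable threshold, then decide which GSM alerts to send and whether to trip the relay. The threshold values and sensor names here are illustrative assumptions, not the paper's calibrated settings.

```python
# Customizable safety thresholds (illustrative values, not the paper's).
THRESHOLDS = {"light": 800, "temperature_c": 60.0, "smoke_ppm": 300}

def evaluate(readings):
    """Return (alerts, relay_off): which sensors exceeded their threshold,
    and whether the relay should disconnect the load."""
    alerts = [name for name, value in readings.items()
              if value > THRESHOLDS[name]]
    return alerts, bool(alerts)

# Overheating scenario: only temperature breaches its threshold.
alerts, relay_off = evaluate({"light": 500, "temperature_c": 72.5, "smoke_ppm": 120})
print(alerts, relay_off)   # ['temperature_c'] True
```

In the real system the alert list would drive the GSM module's SMS messages and `relay_off` would drive the relay pin; here both are just returned for inspection.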
Furthermore, its adaptable design allows for customization across different industrial environments, making it suitable for a wide range of applications.

Brain Tissue Segmentation from the MR Images Affected by Noise and Intensity Inhomogeneity Using a Novel Linguistic Fuzzifier-Based FCM Algorithm

By Sandhya Gudise

DOI: https://doi.org/10.5815/ijigsp.2025.02.04, Pub. Date: 8 Apr. 2025

Brain MRI is mainly affected by noise and intensity inhomogeneity (IIH) during acquisition. Brain tissue segmentation plays an important role in biomedical research and clinical applications, and it is essential for physicians to properly diagnose and treat brain-related disorders. Fuzzy C-means (FCM) clustering is one of the most widely used algorithms for brain tissue segmentation. Traditional FCM suffers from misclassification of pixels, which leads to inaccurate cluster centers; as a result, it cannot address noise and IIH. Moreover, because the fuzzifier in FCM is fixed, there is uncertainty in controlling the fuzziness of the clusters. This paper proposes a novel linguistic fuzzifier-based FCM (LFFCM) to overcome these limitations during brain tissue segmentation from MR images. In this method, a linguistic fuzzifier replaces the fixed fuzzifier, and spatial information incorporated in the membership function reduces pixel misclassification. The inclusion of adaptive weights in the membership function yields highly accurate cluster centers, which allow the proposed LFFCM to handle IIH. Various brain MR images were used to evaluate the proposed technique, and the results were compared with state-of-the-art techniques, revealing that the proposed method performs better than the others.
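For reference, the baseline the paper improves on is the standard FCM membership update, where each pixel's degree of belonging to a cluster depends on its distance to every cluster center. The sketch below uses a fixed fuzzifier m = 2 on 1-D toy intensities; the paper's contribution (a linguistic fuzzifier plus spatial weighting) is precisely what this toy does not implement.

```python
# Standard FCM membership update on scalar intensities:
# u[i][k] = 1 / sum_j (d_ik / d_jk)^(2/(m-1)),  with d the distance to a center.

def fcm_memberships(pixels, centers, m=2.0):
    """u[i][k]: degree to which pixel k belongs to cluster i."""
    u = []
    for ci in centers:
        row = []
        for x in pixels:
            d_i = abs(x - ci) or 1e-12          # guard against zero distance
            s = sum((d_i / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                    for cj in centers)
            row.append(1.0 / s)
        u.append(row)
    return u

u = fcm_memberships(pixels=[0.1, 0.5, 0.9], centers=[0.0, 1.0])
print([round(v, 3) for v in u[0]])   # [0.988, 0.5, 0.012]
```

Note how the pixel midway between both centers gets membership 0.5 in each cluster; with a fixed fuzzifier there is no way to tune how sharply that ambiguity resolves, which motivates making the fuzzifier linguistic.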

Classification of Multilingual Financial Tweets Using an Ensemble Approach Driven by Transformers

By Rupam Bhattacharyya

DOI: https://doi.org/10.5815/ijieeb.2025.02.02, Pub. Date: 8 Apr. 2025

There is a growing interest in multilingual tweet analysis through advanced deep learning techniques. Identifying the sentiments of Twitter (now X) users during an Initial Public Offering (IPO) is an important application in the financial domain, yet research in this area remains limited. In this paper, we introduce a multilingual dataset entitled the LIC IPO dataset. Alongside the dataset, this work offers a modified majority-voting-based ensemble technique. This test-time ensembling technique is driven by fine-tuning state-of-the-art transformer-based pretrained language models used in multilingual natural language processing (NLP) research, and it is employed to perform sentiment analysis over the LIC IPO dataset. This paper reports the performance of our technique along with five transformer-based multilingual NLP models on this dataset, namely a) Bernice, b) TwHIN-BERT, c) MuRIL, d) mBERT, and e) XLM-RoBERTa. We find that our test-time ensemble technique solves this multi-class sentiment classification problem over the proposed dataset better than the individual transformer models, and the encouraging experimental outcomes confirm the efficacy of the proposed approach.
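The core of test-time ensembling by majority vote can be sketched in a few lines: each fine-tuned model predicts a label, and the most common label wins. This toy omits the tie-breaking and the "modified" element of the authors' scheme; the vote values are made-up examples, with the five model names standing in for the fine-tuned transformers.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: {model_name: label}; return the most common label."""
    return Counter(predictions.values()).most_common(1)[0][0]

votes = {"Bernice": "positive", "TwHIN-BERT": "neutral", "MuRIL": "positive",
         "mBERT": "positive", "XLM-RoBERTa": "negative"}
print(majority_vote(votes))   # positive
```

With five voters, an odd ensemble size guarantees no two-way ties in binary settings, though multi-class votes (as here) can still tie and need an explicit tie-break rule.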

Green AI Practices in Multi-objective Hyperparameter Optimization for Sustainable Machine Learning

By K. Jegadeeswari R. Rathipriya

DOI: https://doi.org/10.5815/ijitcs.2025.02.01, Pub. Date: 8 Apr. 2025

Hyperparameter tuning is an essential step in ML model optimization, as it is necessary to improve model performance. However, this enhancement incurs high computational resource and time costs: model tuning can significantly raise energy consumption and consequently increase carbon emissions. There is therefore an essential need for a new framework that treats carbon emissions as a vital consideration alongside performance. This paper proposes a novel Sustainable Hyperparameter Optimization (SHPO) framework that uses an optimized multi-objective fitness approach. The framework focuses on ensemble classification models (ECMs), namely Random Forest, ExtraTrees, XGBoost, and AdaBoost, all optimized using traditional and advanced techniques such as Optuna, Hyperopt, and Grid Search. The proposed framework tracks carbon emissions during hyperparameter tuning and applies the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a multi-criteria decision-making (MCDM) method, to rank hyperparameter sets based on both accuracy and carbon emissions. The objective of the multi-objective fitness approach is to reach the best parameter set with high accuracy and low carbon emissions. The experimental results show that Optuna-based hyperparameter optimization consistently produced low carbon emissions and achieved high predictive accuracy across the majority of benchmark hyperparameter setups for the models.
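A compact sketch of TOPSIS over hyperparameter candidates scored on accuracy (a benefit criterion) and carbon emissions (a cost criterion): vector-normalize and weight each criterion, find the ideal best and worst points, then rank by relative closeness to the ideal. The weights and candidate values below are illustrative assumptions, not figures from the paper.

```python
import math

def topsis(rows, weights, benefit):
    """Rank alternatives; rows[i][j] is alternative i's score on criterion j,
    benefit[j] is True for benefit criteria and False for cost criteria."""
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(len(weights))]
    v = [[w * r[j] / n for j, (w, n) in enumerate(zip(weights, norms))] for r in rows]
    # Ideal best/worst per criterion (max for benefit, min for cost, and vice versa).
    best = [max(c) if b else min(c) for c, b in zip(zip(*v), benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(zip(*v), benefit)]
    # Relative closeness to the ideal solution.
    return [math.dist(row, worst) / (math.dist(row, best) + math.dist(row, worst))
            for row in v]

# Candidates: (accuracy, kg CO2). Equal weights; accuracy up, emissions down.
scores = topsis([(0.91, 0.20), (0.93, 0.90), (0.88, 0.10)],
                weights=[0.5, 0.5], benefit=[True, False])
print(max(range(len(scores)), key=scores.__getitem__))
```

In this toy the low-emission candidate wins despite slightly lower accuracy, illustrating how the closeness score trades the two criteria off rather than maximizing accuracy alone.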

Copyright Protection and Illegal Distributor Identification for Video-on-demand Applications using Forensic Watermarking

By Ayesha Shaik Masilamani V.

DOI: https://doi.org/10.5815/ijisa.2025.02.03, Pub. Date: 8 Apr. 2025

In the direct-to-home (DTH) environment, video-on-demand (VOD) applications are tremendously popular due to their inexpensive and convenient nature. In the VOD approach, legal customers can connect their set-top boxes (STBs) to the Internet and access or record the available content. Because the highest-quality digital data is easily transmitted to customers through the pay-per-view approach, the data is highly at risk: it is vulnerable to illegal distribution of duplicate copies and prone to unnecessary modifications, which creates financial loss for the content creators. It is therefore necessary to authenticate both the owner and the illegal distributor in order to reduce digital piracy, which is the motivation for this work. This paper presents a forensic watermarking scheme for protecting copyrights and for identifying the illegal distributor who, in violation of copyright, distributes a legal copy illegally. Two watermarks are embedded in the on-demand video: one carries the owner's information and the other carries the unique information of the STB. This work is also suitable for the biomedical domain, where one watermark can carry the patient information and the other the health center information, securing both the patient and hospital records.
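To illustrate the idea of carrying two independent marks (owner ID and STB ID) in one signal, here is a toy least-significant-bit scheme over pixel values. Real forensic watermarking operates in transform domains and is far more robust; this sketch, with its hypothetical function names and sample data, only demonstrates the dual-watermark concept.

```python
def embed(pixels, owner_bits, stb_bits):
    """Hide owner bits in even-indexed pixels and STB bits in odd-indexed
    pixels by overwriting each pixel's least significant bit."""
    out = list(pixels)
    for i, (ob, sb) in enumerate(zip(owner_bits, stb_bits)):
        out[2 * i] = (out[2 * i] & ~1) | ob          # owner mark
        out[2 * i + 1] = (out[2 * i + 1] & ~1) | sb  # STB mark
    return out

def extract(pixels, n):
    """Recover n bits of each watermark from the pixel LSBs."""
    owner = [pixels[2 * i] & 1 for i in range(n)]
    stb = [pixels[2 * i + 1] & 1 for i in range(n)]
    return owner, stb

marked = embed([200, 201, 202, 203], owner_bits=[1, 0], stb_bits=[0, 1])
print(extract(marked, 2))   # ([1, 0], [0, 1])
```

The owner mark supports copyright claims, while the per-device STB mark is what lets a leaked copy be traced back to the distributing subscriber.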

An Extended Symbolic-Arithmetic Model for Teaching Double-Black Removal with Rotation in Red-Black Trees

By Kennedy E Ehimwenma Hongyu Zhou Junfeng Wang Ze Zheng

DOI: https://doi.org/10.5815/ijmsc.2025.01.01, Pub. Date: 8 Apr. 2025

Double-black (DB) nodes have no place in red-black (RB) trees, so when DB nodes are formed they are immediately removed. The removal of DB nodes that causes rotation and recoloring of other connected nodes poses the greater challenge in the teaching and learning of RB trees. To ease this difficulty, this paper extends our previous work on the symbolic arithmetic algebraic (SA) method for removing DB nodes. The SA operations, given as Red + Black = Black; Black - Black = Red; Black + Black = DB; and DB - Black = Black, remove DB nodes and rebalance black heights in RB trees. By extension, this paper projects three SA mathematical equations, namely, the general symbolic arithmetic rule, Δ_[DB,r,p]; partial symbolic arithmetic rule 1, ∂′_[DB,p]; and partial symbolic arithmetic rule 2, ∂″_[r]. The removal of a DB node ultimately affects black heights in the RB tree. To balance black heights using the SA equations, all the RB tree cases, namely LR, RL, LL, and RR, were considered in this work, and the positions of the nodes connected directly or indirectly to the DB node were also tested. To balance an RB tree, the issues considered with respect to the different cases were i) whether a DB node has an inner, an outer, or both inner and outer black nephews; and ii) whether a DB node has an inner, an outer, or both inner and outer red nephews. The nephews r and x in this work are the children of the sibling s of a DB node, and further up the tree the parent p of a DB node is their grandparent g; thus r and x are related only indirectly to a DB node at the point of its formation. The novelty of the SA equations lies in their effectiveness in removing a DB node when that removal involves rotation of nodes as well as recoloring of nodes along any simple path so as to balance black heights in a tree. Our SA methods assert when, where, and how to remove a DB node and which nodes to recolor.
As shown in this work, the SA algorithms are faster, with respect to the number of steps taken to balance an RB tree, than the traditional RB algorithm for DB removal. The simplified and systematic approach of the SA methods has enhanced student learning and understanding of DB node removal in RB trees.
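The four SA color operations quoted in the abstract become ordinary integer arithmetic if each color is read as a black-height weight (Red = 0, Black = 1, DB = 2). The sketch below encodes exactly that reading; it reproduces the four stated rules only, not the rotation machinery of the full method.

```python
# Colors as black-height weights: Red = 0, Black = 1, DB (double-black) = 2.
WEIGHT = {"Red": 0, "Black": 1, "DB": 2}
COLOR = {v: k for k, v in WEIGHT.items()}

def sa(a, op, b):
    """Apply one symbolic-arithmetic color operation and name the result."""
    w = WEIGHT[a] + WEIGHT[b] if op == "+" else WEIGHT[a] - WEIGHT[b]
    return COLOR[w]

print(sa("Red", "+", "Black"))    # Black
print(sa("Black", "-", "Black"))  # Red
print(sa("Black", "+", "Black"))  # DB
print(sa("DB", "-", "Black"))     # Black
```

Under this reading, "Black + Black = DB" is the over-weighting that creates a double-black during deletion, and "DB - Black = Black" is the step that discharges it back to a legal color.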

Toxicity Detection Using TextBlob Sentiment Analysis for Location-Centered Tweets

By Varun Mishra Tejaswita Garg

DOI: https://doi.org/10.5815/ijmecs.2025.02.06, Pub. Date: 8 Apr. 2025

Toxic comment detection over the internet finds hateful comments in social networking posts and applies limitations to stop their negative impact on society. NLP text classification is very effective for performing such sentiment analysis. In this paper, we design an algorithm using a Convolutional Neural Network (CNN) and perform TextBlob sentiment analysis to evaluate the polarity and subjectivity of posted tweets or comments. The approach also filters tweets collected from different locations to form the Twitter dataset; the model is then evaluated in terms of accuracy, precision, recall, and F1-score, with calculated results of 0.984, 0.887, 0.905, and 0.895, respectively, for toxic/non-toxic comment identification. Our algorithm, which utilizes the NLTK and TextBlob libraries, thus suggests whether an analyzed post can be recommended to others.
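A minimal sketch of the final labeling step described above: given the (polarity, subjectivity) pair that TextBlob returns for a tweet (polarity in [-1, 1], subjectivity in [0, 1]), flag likely toxic posts. The -0.3 polarity cutoff and 0.5 subjectivity floor are illustrative assumptions, not the paper's tuned thresholds, and the sample score pairs are hand-picked.

```python
def label(polarity, subjectivity, cutoff=-0.3):
    """Flag a post as toxic if it is clearly negative and opinionated
    rather than factual (illustrative thresholds only)."""
    return "toxic" if polarity < cutoff and subjectivity > 0.5 else "non-toxic"

# Hypothetical TextBlob sentiment pairs for three tweets.
samples = [(-0.8, 0.9), (0.4, 0.6), (-0.5, 0.2)]
print([label(p, s) for p, s in samples])  # ['toxic', 'non-toxic', 'non-toxic']
```

Gating on subjectivity as well as polarity keeps negative but factual statements (third sample) from being flagged, which is one plausible reading of combining both TextBlob scores.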

Evaluation of Machine Learning Algorithms for Malware Detection: A Comprehensive Review

By Sadia Haq Tamanna Muhammad Muhtasim Aroni Saha Prapty Amrin Nahar Md. Tanvir Ahmed Tagim Fahmida Rahman Moumi Shadia Afrin

DOI: https://doi.org/10.5815/ijwmt.2025.02.05, Pub. Date: 8 Apr. 2025

Malware poses a dynamic and varied threat to digital environments that outpaces conventional signature-based techniques. In cybersecurity, machine learning has become a potent tool, providing flexible and data-driven models for malware identification. This review paper emphasizes the significance of choosing the optimal method for this purpose. Assembling various datasets comprising benign and malicious samples is the first step in the research process, followed by important data pretreatment procedures such as feature extraction and dimensionality reduction. Machine learning techniques, ranging from decision trees to deep learning models, are evaluated on metrics such as accuracy, precision, recall, F1-score, and ROC-AUC, which determine how well they distinguish dangerous software from benign applications. A thorough examination of numerous studies shows that the Random Forest algorithm is the most effective at identifying malware. Because Random Forest handles complex and dynamic malware so well, it performs strongly in both batch and real-time scenarios, and exceptionally well under both static and dynamic analysis. This study emphasizes how important machine learning is and how Random Forest forms the basis for robust malware detection; its effectiveness, scalability, and adaptability make it a crucial tool for businesses and individuals looking to protect sensitive data and digital assets. In conclusion, by highlighting the value of machine learning and establishing Random Forest as the best-in-class method for malware detection, this review advances the subject of cybersecurity. Ethical and privacy concerns reinforce the necessity for responsible implementation and continuous research to tackle the changing malware landscape.
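For readers comparing the surveyed studies, the four threshold-based metrics named above all derive from the confusion matrix of a detector. The sketch below computes them from made-up counts (not figures from any surveyed study); ROC-AUC is omitted since it needs ranked scores rather than counts.

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (tp = malware correctly flagged, fp = benign wrongly flagged, etc.)."""
    precision = tp / (tp + fp)          # flagged samples that were malware
    recall = tp / (tp + fn)             # malware samples that were caught
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

acc, p, r, f1 = metrics(tp=90, fp=10, fn=10, tn=90)
print(round(acc, 2), round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.9 0.9 0.9
```

Because malware datasets are often imbalanced, precision, recall, and F1 are usually more informative than raw accuracy when ranking detectors.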

